
Markov Random Fields for Collaborative Filtering

Steck, Harald

arXiv.org Machine Learning

Collaborative filtering has witnessed significant improvements in recent years, largely due to models based on low-dimensional embeddings, like weighted matrix factorization (e.g., [26, 39]) and deep learning [23, 22, 33, 47, 62, 58, 20, 11], including autoencoders [58, 33]. Also neighborhood-based approaches are competitive in certain regimes (e.g., [1, 53, 54]), despite being simple heuristics based on item-item (or user-user) similarity matrices (like cosine similarity). In this paper, we outline that Markov Random Fields (MRF) are closely related to autoencoders as well as to neighborhood-based approaches. We build on the enormous progress made in learning MRFs, in particular in sparse inverse covariance estimation (e.g., [36, 59, 15, 2, 60, 44, 45, 63, 55, 24, 25, 52, 56, 51]). Much of the literature on sparse inverse covariance estimation focuses on the regime where the number of data points n is much smaller than the number of variables m in the model (n ≪ m).
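The connection between MRFs and neighborhood-based regression that the abstract alludes to rests on a standard Gaussian identity: the weights obtained by regressing one variable on all others equal the (sign-flipped, rescaled) off-diagonal entries of the precision (inverse covariance) matrix. The following is a minimal numpy sketch of that identity on synthetic data; the precision matrix `Theta` and sample size are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-variable Gaussian MRF with a known, sparse precision matrix.
Theta = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)

# Least-squares regression of variable 0 on variables 1 and 2.
w, *_ = np.linalg.lstsq(X[:, 1:], X[:, 0], rcond=None)

# Gaussian-MRF identity: regression weights are -Theta[0, j] / Theta[0, 0],
# so the zero entry Theta[0, 2] = 0 shows up as a (near-)zero weight.
expected = -Theta[0, 1:] / Theta[0, 0]
```

Sparse inverse covariance estimation reverses this direction: it recovers a sparse `Theta` from data, which simultaneously yields the item-item regression weights used for neighborhood-style recommendation.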


Embarrassingly Shallow Autoencoders for Sparse Data

Steck, Harald

arXiv.org Machine Learning

Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments.
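The closed-form solution referred to in the abstract can be sketched in a few lines of numpy. This is a hedged reconstruction, not the paper's reference implementation: the function name `ease`, the regularization parameter `lam`, and its default value are illustrative. The model learns an item-item weight matrix B with zero diagonal by minimizing a ridge-regression objective over the interaction matrix X, which reduces to one matrix inversion.

```python
import numpy as np

def ease(X, lam=100.0):
    """Closed-form item-item weights for a linear autoencoder on implicit feedback.

    X   : binary user-item interaction matrix, shape (n_users, n_items).
    lam : L2 regularization strength (illustrative default).
    Returns B, an (n_items, n_items) weight matrix with zero diagonal.
    """
    n_items = X.shape[1]
    G = X.T @ X + lam * np.eye(n_items)   # regularized Gram matrix
    P = np.linalg.inv(G)                  # precision-like matrix
    B = P / (-np.diag(P))                 # B[i, j] = -P[i, j] / P[j, j]
    np.fill_diagonal(B, 0.0)              # constraint: no self-similarity
    return B

# Predicted scores for ranking are then simply X @ B.
```

A single inversion of an item-item matrix replaces iterative training, which is what makes the model "embarrassingly shallow": the cost is independent of the number of users once the Gram matrix X.T @ X is formed.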